
    Effective Spoken Language Labeling with Deep Recurrent Neural Networks

    Understanding spoken language is a highly complex problem, which can be decomposed into several simpler tasks. In this paper we focus on Spoken Language Understanding (SLU), the module of spoken dialog systems responsible for extracting a semantic interpretation from the user utterance. The task is treated as a labeling problem. In the past, SLU has been performed with a wide variety of probabilistic models. In the last few years, the rise of neural networks has opened interesting new research directions in this domain. Recurrent Neural Networks (RNNs) in particular can not only represent several pieces of information as embeddings but also, thanks to their recurrent architecture, encode relatively long contexts as embeddings. Such long contexts are generally out of reach for the models previously used for SLU. In this paper we propose novel RNN architectures for SLU which outperform previous ones. Starting from a published idea as a building block, we design new deep RNNs achieving state-of-the-art results on two widely used corpora for SLU: ATIS (Air Travel Information System), in English, and MEDIA (hotel information and reservation in France), in French.
    Comment: 8 pages. Rejected from IJCAI 2017; remarks were good overall, but the paper was judged slightly off-topic according to the global meta-reviews. Recommendations: 8, 6, 6, 4. arXiv admin note: text overlap with arXiv:1706.0174
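    As a rough illustration of the task setting (not the deep architecture proposed in the paper), the sketch below labels each word of an utterance with a semantic tag using a bidirectional RNN; all vocabulary sizes and dimensions are illustrative assumptions.

```python
# Minimal sketch of RNN-based slot labeling for SLU (illustrative only).
import torch
import torch.nn as nn

class RNNSlotTagger(nn.Module):
    def __init__(self, vocab_size=10000, n_labels=127, emb_dim=200, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # A bidirectional GRU encodes left and right context for every token.
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hid_dim, n_labels)

    def forward(self, tokens):           # tokens: (batch, seq_len)
        states, _ = self.rnn(self.embed(tokens))
        return self.out(states)          # (batch, seq_len, n_labels) logits

tagger = RNNSlotTagger()
logits = tagger(torch.randint(0, 10000, (1, 12)))  # one 12-word utterance
print(logits.shape)  # torch.Size([1, 12, 127])
```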

    TArC: Incrementally and Semi-Automatically Collecting a Tunisian Arabish Corpus

    This article describes the process of building the first morpho-syntactically annotated Tunisian Arabish Corpus (TArC). Arabish, also known as Arabizi, is a spontaneous coding of Arabic dialects in Latin characters and arithmographs (numbers used as letters). This code-system was developed by Arabic-speaking users of social media to facilitate writing in the informal settings of Computer-Mediated Communication (CMC) and text messaging. Arabish varies in its realization across dialects, and each Arabish code-system is under-resourced, as are most Arabic dialects. In the last few years, the focus on Arabic dialects in the NLP field has increased considerably. With this in mind, TArC will be a useful support for different types of analyses, computational and linguistic, as well as for training NLP tools. In this article we describe preliminary work on the semi-automatic construction process of TArC and some of the first analyses we developed on it. In addition, to provide a complete overview of the challenges faced during the building process, we present the main characteristics of the Tunisian dialect and their encoding in Tunisian Arabish.
    Comment: Paper accepted at the Language Resources and Evaluation Conference (LREC) 2020

    Encoding Sentence Position in Context-Aware Neural Machine Translation with Concatenation

    Context-aware translation can be achieved by processing a concatenation of consecutive sentences with the standard Transformer architecture. This paper investigates the intuitive idea of providing the model with explicit information about the position of the sentences contained in the concatenation window. We compare various methods of encoding sentence positions into token representations, including novel methods. Our results show that the Transformer benefits from certain sentence position encoding methods on English-to-Russian translation if trained with a context-discounted loss (Lupo et al., 2022). However, the same benefits are not observed in English-to-German. Further empirical efforts are necessary to define the conditions under which the proposed approach is beneficial.
    Comment: Insights 2023 camera-ready
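    For illustration, here is a minimal sketch of one way to encode sentence positions into token representations: a learned embedding of each token's sentence index within the concatenation window, added to the token embedding. The paper compares several such methods; this shows only the general idea, with assumed sizes.

```python
# Minimal sketch: add a learned sentence-position embedding to token embeddings.
import torch
import torch.nn as nn

class SentencePositionEmbedding(nn.Module):
    def __init__(self, d_model=512, max_sentences=4):
        super().__init__()
        self.pos = nn.Embedding(max_sentences, d_model)

    def forward(self, token_emb, sent_ids):
        # token_emb: (batch, seq_len, d_model) token embeddings
        # sent_ids:  (batch, seq_len) index of the sentence each token belongs
        #            to inside the concatenation window (0 = oldest sentence)
        return token_emb + self.pos(sent_ids)

emb = torch.randn(1, 10, 512)
# Two context sentences (ids 0 and 1) followed by the current sentence (id 2).
ids = torch.tensor([[0, 0, 0, 1, 1, 1, 2, 2, 2, 2]])
print(SentencePositionEmbedding()(emb, ids).shape)  # torch.Size([1, 10, 512])
```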

    Multi-Task sequence prediction for Tunisian Arabizi multi-level annotation

    In this paper we propose a multi-task sequence prediction system based on recurrent neural networks, used to annotate a Tunisian Arabizi corpus on multiple levels. The annotations performed are text classification, tokenization, PoS tagging, and encoding of Tunisian Arabizi into CODA* Arabic orthography. The system is trained to predict all the annotation levels in cascade, starting from the Arabizi input. To show the effectiveness of our neural architecture, we evaluate the system on the TIGER German corpus, suitably converting the data to obtain a multi-task problem. We also show how we used the system to annotate a Tunisian Arabizi corpus, which was afterwards manually corrected and used to further evaluate sequence models on Tunisian data. Our system is developed for the Fairseq framework, which allows fast and easy use for any other sequence prediction problem.
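    As a hedged illustration of the cascade idea (not the Fairseq-based system itself), the sketch below feeds the predictions of a tokenization head into a PoS head, so each level is conditioned on the previous one; all names and sizes are assumptions.

```python
# Minimal sketch of cascaded multi-task sequence prediction (illustrative).
import torch
import torch.nn as nn

class CascadeTagger(nn.Module):
    def __init__(self, vocab=8000, emb=128, hid=128, n_tok=4, n_pos=30):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.enc = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.tok_head = nn.Linear(2 * hid, n_tok)        # tokenization tags
        self.tok_emb = nn.Embedding(n_tok, hid)          # embed level-1 output
        self.pos_head = nn.Linear(2 * hid + hid, n_pos)  # PoS head, conditioned
                                                         # on tokenization

    def forward(self, tokens):
        h, _ = self.enc(self.embed(tokens))
        tok_logits = self.tok_head(h)
        # Cascade: the next level consumes the previous level's predictions
        # (greedy here; training would typically use the gold annotations).
        tok_pred = tok_logits.argmax(-1)
        pos_logits = self.pos_head(torch.cat([h, self.tok_emb(tok_pred)], -1))
        return tok_logits, pos_logits
```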

    Discriminative Reranking for Spoken Language Understanding


    Hybrid Neural Models for Sequence Modeling: The Best of Three Worlds

    We propose a neural architecture with the main characteristics of the most successful neural models of recent years: bidirectional RNNs, encoder-decoder models, and the Transformer model. Evaluation on three sequence labeling tasks yields results that are close to the state of the art for all tasks, and better for some of them, showing the pertinence of this hybrid architecture for this kind of task.
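    A minimal sketch of how the three ingredients could be combined for sequence labeling, with assumed sizes; this is not the authors' exact model.

```python
# Minimal sketch: bidirectional RNN + Transformer-style self-attention +
# a per-token decoder, combined in one sequence labeller (illustrative).
import torch
import torch.nn as nn

class HybridTagger(nn.Module):
    def __init__(self, vocab=10000, n_labels=50, d=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.birnn = nn.LSTM(d, d // 2, batch_first=True, bidirectional=True)
        self.attn = nn.TransformerEncoderLayer(d_model=d, nhead=4,
                                               batch_first=True)
        self.decoder = nn.Linear(d, n_labels)

    def forward(self, tokens):
        h, _ = self.birnn(self.embed(tokens))  # recurrent context encoding
        h = self.attn(h)                       # self-attentive refinement
        return self.decoder(h)                 # per-token label logits
```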

    Modeling a Global Label Context for Sequence Labeling in Recurrent Neural Networks

    During the last few years, Recurrent Neural Networks (RNNs) have reached state-of-the-art performance on most sequence modeling problems. In particular, the sequence-to-sequence model and the neural CRF have proved very effective on this class of problems. In this paper we propose an alternative RNN for sequence labeling, based on label embeddings and memory networks, which makes it possible to take arbitrarily long label contexts into account. Our results are better than those of state-of-the-art models in most cases, and close to them in all cases. Moreover, our solution is simpler than the best models in the literature. Keywords: recurrent neural networks, global context, sequence labeling.
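    A hedged sketch of the label-embedding idea: at each step the tagger attends over the embeddings of all labels predicted so far, so the label context is not bounded by a fixed window. This is an illustrative approximation, not the paper's exact memory network; batch size 1 is assumed for clarity.

```python
# Minimal sketch: sequence labeling with an unbounded memory of label embeddings.
import torch
import torch.nn as nn

class LabelContextTagger(nn.Module):
    def __init__(self, vocab=10000, n_labels=50, d=128):
        super().__init__()
        self.w_emb = nn.Embedding(vocab, d)
        self.l_emb = nn.Embedding(n_labels + 1, d)  # index n_labels = start
        self.rnn = nn.GRU(d, d, batch_first=True, bidirectional=True)
        self.out = nn.Linear(3 * d, n_labels)
        self.n_labels = n_labels
        self.d = d

    def forward(self, tokens):                      # tokens: (1, T)
        h, _ = self.rnn(self.w_emb(tokens))         # (1, T, 2d)
        memory = [self.l_emb(torch.tensor([self.n_labels]))]  # start label
        logits = []
        for t in range(tokens.size(1)):
            mem = torch.stack(memory, dim=1)        # (1, t+1, d)
            # Attend over all past label embeddings (query: forward state).
            scores = torch.softmax(mem @ h[:, t, :self.d].unsqueeze(-1), dim=1)
            ctx = (scores * mem).sum(dim=1)         # (1, d) label context
            step = self.out(torch.cat([h[:, t], ctx], dim=-1))
            logits.append(step)
            # Greedy prediction extends the memory (training would use gold).
            memory.append(self.l_emb(step.argmax(-1)))
        return torch.stack(logits, dim=1)           # (1, T, n_labels)
```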